
Fix #17684 (concatenations involving annotated sparse/special matrices) #17900

Merged · 1 commit · Sep 14, 2016

Conversation

Sacha0
Member

Sacha0 commented Aug 8, 2016

This pull request addresses JuliaLang/LinearAlgebra.jl#350 by making concatenations involving annotated sparse/special matrices yield sparse arrays.

The tests introduced increase linalg/special's run time from ~13s to ~465s locally. Though systematic testing is nice, testing that heavy seems a bit excessive for this functionality. Paring off a few type combinations, for example testing solely combinations involving LowerTriangular rather than similar combinations for all <:AbstractTriangular (which should exercise the same code), should reduce the test time increase substantially. Thoughts @tkelman @kshyatt?

Ref. #17660. Best!
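A minimal sketch of the behavior this PR targets (values illustrative; in current Julia the relevant types live in the LinearAlgebra and SparseArrays stdlibs rather than in Base, as they did at the time):

```julia
using LinearAlgebra, SparseArrays

S = Symmetric(sparse([1.0 2.0; 2.0 3.0]))  # annotated sparse matrix
D = Diagonal([4.0, 5.0])                   # special matrix

# With this change, concatenations involving such matrices promote
# to a sparse result instead of falling back to a dense Matrix:
C = hcat(S, D)
issparse(C)
```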

@tkelman tkelman added the sparse Sparse arrays label Aug 8, 2016
@Sacha0
Member Author

Sacha0 commented Aug 8, 2016

Would one of the powers that be invoke Nanosoldier following CI / when Nanosoldier rides again? This PR touches some performance-sensitive code. Thanks!

@tkelman
Contributor

tkelman commented Aug 8, 2016

There is a JULIA_TESTFULL environment variable that we put a few of the time-consuming or comprehensive tests behind. I don't think we need to run the full-factorial set of combinations every time, we could select some sparser subset.

@Sacha0
Member Author

Sacha0 commented Aug 8, 2016

There is a JULIA_TESTFULL environment variable that we put a few of the time-consuming or comprehensive tests behind. I don't think we need to run the full-factorial set of combinations every time, we could select some sparser subset.

Perfect. I'll work that in. Thanks!
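The gating discussed above might look like the following sketch (the variable name and the particular type tuples are illustrative, not the PR's exact code):

```julia
using LinearAlgebra

# Exercise the full factorial set of type combinations only when the
# JULIA_TESTFULL environment variable is set; otherwise run a
# representative subset to keep CI time reasonable.
const testfull = haskey(ENV, "JULIA_TESTFULL")

const annotationtypes = testfull ?
    (LowerTriangular, UpperTriangular, Symmetric, Hermitian) :
    (LowerTriangular, Symmetric)

for T in annotationtypes
    A = T([1.0 2.0; 3.0 4.0])
    @assert hcat(A, A) isa AbstractMatrix
end
```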

@kshyatt
Contributor

kshyatt commented Aug 8, 2016

@Sacha0 last time I checked, it was (relatively) easy to get the benchmark suite to run on my Linux desktop if you wanted to try that. You could also add in some more sparse benchmarks since the state of that is pretty woeful. (Not to pile on more work, but you submit so many great PRs anyway...)

@Sacha0
Member Author

Sacha0 commented Aug 8, 2016

Without JULIA_TESTFULL, linalg/special runs (locally) in under thirty seconds again. (I pared down the regularly tested annotation types to (LowerTriangular, Symmetric) and the tested sparse/special matrix types to (SparseMatrixCSC, Diagonal).)

last time I checked, it was (relatively) easy to get the benchmark suite to run on my Linux desktop if you wanted to try that.

Cheers, I will have a go at running the benchmark suite locally in the next few days (unless Nanosoldier resurfaces first).

You could also add in some more sparse benchmarks since the state of that is pretty woeful. (Not to pile on more work, but you submit so many great PRs anyway...)

Thanks for the kind words! Perhaps once I've coaxed full back into its grave I'll have a look there. Best!

@vchuravy
Member

vchuravy commented Aug 9, 2016

AFAIK good old nanosoldier is back from vacation.

@nanosoldier runbenchmarks(ALL, vs = ":master")

@Sacha0
Member Author

Sacha0 commented Aug 9, 2016

Travis OS X failure seems #16802-esque? Edit: Scratch that. More probably related to the discussion in #17823?

@jrevels
Member

jrevels commented Aug 9, 2016

I'm not sure why Nanosoldier didn't respond to this one...

@nanosoldier runbenchmarks(ALL, vs = ":master")

EDIT: lol, @vchuravy you have to have commit access to this repo to use Nanosoldier.

@KristofferC
Member

KristofferC commented Aug 9, 2016

Because not contributor status.

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@Sacha0
Member Author

Sacha0 commented Aug 23, 2016

Is this in shape to merge? Thanks!

@StefanKarpinski
Member

Is this 0.5.x or 0.6? I'm guessing 0.6 since it's a behavioral change.

@StefanKarpinski
Member

Or is it purely a performance enhancement?

@tkelman
Contributor

tkelman commented Aug 23, 2016

This changes the result types of various concatenations; I don't think it's backport material.

@Sacha0
Member Author

Sacha0 commented Aug 23, 2016

Should be 0.6. Could easily break things.

@tkelman tkelman added this to the 0.6.0 milestone Aug 23, 2016
@Sacha0
Member Author

Sacha0 commented Sep 11, 2016

Additional thoughts? Thanks!

@tkelman
Contributor

tkelman commented Sep 12, 2016

once more for good measure @nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier
Collaborator

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@Sacha0
Member Author

Sacha0 commented Sep 13, 2016

I cannot locally reproduce any of the possible regressions the second nanosoldier run indicated. Thoughts? Thanks!

@tkelman
Contributor

tkelman commented Sep 14, 2016

Hope it's noise; we'll see whether the nightly results look consistent.

@tkelman tkelman merged commit 30bf89f into JuliaLang:master Sep 14, 2016
@Sacha0
Member Author

Sacha0 commented Sep 14, 2016

The nightly looks happy, so cheers!

8 participants